Generating realistic lip motion from audio to simulate speech production is critical for driving natural character animation. Previous research has shown that the traditional metrics used to optimize and assess models for generating lip motion from speech are not good indicators of subjective opinion of animation quality. Devising metrics that align with subjective opinion first requires understanding what impacts human perception of quality. In this work, we focus on the degree of articulation and run a series of experiments to study how articulation strength impacts human perception of lip motion accompanying speech. Specifically, we study how increasingly under-articulated (dampened) and over-articulated (exaggerated) lip motion affects human perception of quality. We examine the impact of articulation strength on human perception both when considering only lip motion, where viewers are presented with talking faces represented by landmarks, and in the context of embodied characters, where viewers are presented with photo-realistic videos. Our results show that viewers consistently prefer over-articulated lip motion over under-articulated lip motion, and that this preference generalizes across different speakers and embodiments.
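The dampening/exaggeration manipulation can be sketched as scaling lip-landmark displacements about a per-landmark rest pose. This is an illustrative assumption, not the paper's stated procedure; the function name and the choice of the temporal mean as the rest pose are hypothetical.

```python
import numpy as np

def scale_articulation(landmarks, strength):
    """Scale lip-landmark motion about its temporal mean pose.

    landmarks: array of shape (T, K, 2) -- K 2-D lip landmarks over T frames.
    strength < 1 dampens motion (under-articulation);
    strength > 1 exaggerates it (over-articulation);
    strength = 1 leaves the motion unchanged.
    """
    neutral = landmarks.mean(axis=0, keepdims=True)  # per-landmark rest pose
    return neutral + strength * (landmarks - neutral)
```

With `strength=0` every frame collapses to the rest pose; values between 0 and 1 interpolate toward it, and values above 1 extrapolate away from it.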
Autonomous navigation in crowded spaces poses a challenge for mobile robots due to the highly dynamic, partially observable environment. Occlusions are highly prevalent in such settings due to a limited sensor field of view and obstructing human agents. Previous work has shown that observed interactive behaviors of human agents can be used to estimate potential obstacles despite occlusions. We propose integrating such social inference techniques into the planning pipeline. We use a variational autoencoder with a specially designed loss function to learn representations that are meaningful for occlusion inference, and adopt a deep reinforcement learning approach to incorporate the learned representation into occlusion-aware planning. In simulation, our occlusion-aware policy achieves collision avoidance performance comparable to fully observable navigation by estimating agents in occluded spaces. We demonstrate successful policy transfer from simulation to a real-world TurtleBot 2i. To the best of our knowledge, this is the first work to use social occlusion inference for crowd navigation.
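The representation-learning step can be illustrated with a generic VAE training objective: reconstruction error plus a KL term on the latent. This is a standard ELBO sketch only; the paper's specially designed loss adds occlusion-specific terms that are not reproduced here, and the function signature is hypothetical.

```python
import numpy as np

def vae_loss(x, x_hat, mu, log_var, beta=1.0):
    """Generic VAE objective: reconstruction + beta-weighted KL divergence
    between the diagonal-Gaussian posterior N(mu, exp(log_var)) and N(0, I).
    A standard ELBO sketch, not the paper's specially designed loss.
    """
    recon = np.mean((x - x_hat) ** 2)                       # reconstruction term
    kl = -0.5 * np.mean(1 + log_var - mu**2 - np.exp(log_var))  # KL to unit Gaussian
    return recon + beta * kl
```

A perfect reconstruction with a posterior matching the prior gives a loss of zero; deviations in either term push the loss up.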
Bayesian quadrature (BQ) is a model-based numerical integration method that can improve sample efficiency by encoding and leveraging known structure of the integration task at hand. In this paper, we explore priors that encode invariance of the integrand under a set of bijective transformations of the input domain, in particular unitary transformations such as rotations, axis flips, or point symmetries. We present initial results showing superior performance compared to standard Bayesian quadrature on several synthetic tasks and one real-world application.
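One common way to encode such an invariance in a Gaussian-process prior (and hence in BQ) is to symmetrize the base kernel over the finite transformation group, so the induced prior assigns the same value to all transformed inputs. This is a hedged sketch of that general construction, not necessarily the one used in the paper; the RBF base kernel and the `group` argument are illustrative.

```python
import numpy as np

def rbf(x, y, ell=1.0):
    """Standard RBF base kernel."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * ell**2))

def invariant_kernel(x, y, group):
    """Symmetrize a base kernel over a finite group of input
    transformations: k_inv(x, y) = mean_g k(g(x), y). Samples from the
    induced GP prior are then invariant under every g in the group.
    """
    return np.mean([rbf(g(x), y) for g in group])
```

For example, with the two-element group {identity, negation} the symmetrized kernel is invariant under point symmetry about the origin.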
Autonomous vehicles must reason about spatial occlusions in urban environments to ensure safety without being overly cautious. Prior work has explored occlusion inference from the observed social behaviors of road agents, thereby treating people as sensors. Inferring occupancy from agent behavior is an inherently multimodal problem; a driver may behave similarly under different occupancy patterns ahead of them (e.g., a driver may move at constant speed in traffic or on an open road). Past work, however, does not account for this multimodality, thus neglecting to model this source of aleatoric uncertainty in the relationship between driver behaviors and their environment. We propose an occlusion inference approach that characterizes the observed behaviors of human agents as sensor measurements and fuses them with those from a standard sensor suite. To capture the aleatoric uncertainty, we train a conditional variational autoencoder with a discrete latent space to learn a multimodal mapping from observed driver trajectories to an occupancy grid representation of the view ahead of the driver. Our method handles multi-agent scenarios, combining measurements from multiple observed drivers using evidential theory to solve the sensor fusion problem. Our approach is validated on a real-world dataset, outperforming baselines and demonstrating real-time capable performance. Our code is available at https://github.com/sisl/MultiAgentVariationalOcclusionInference.
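The evidential fusion step can be sketched with Dempster's rule of combination over a per-cell frame of discernment {occupied, free}, where each source assigns mass to "occupied", "free", and "unknown" (the full frame). The mass-tuple interface below is illustrative; the paper's grid-level fusion details are not reproduced.

```python
def dempster_combine(m1, m2):
    """Combine two evidential mass functions over {occupied, free} with
    Dempster's rule. Each mass is a tuple (m_occ, m_free, m_unknown)
    summing to 1; conflict (occ vs. free) is normalized out.
    """
    o1, f1, u1 = m1
    o2, f2, u2 = m2
    conflict = o1 * f2 + f1 * o2          # mass assigned to the empty set
    k = 1.0 - conflict                    # normalization constant
    occ = (o1 * o2 + o1 * u2 + u1 * o2) / k
    free = (f1 * f2 + f1 * u2 + u1 * f2) / k
    unk = (u1 * u2) / k
    return occ, free, unk
```

Combining any mass function with the vacuous one (0, 0, 1) leaves it unchanged, which is why unobserved drivers contribute no spurious evidence.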
Predicting the future occupancy state of an environment is important for enabling informed decision making for autonomous vehicles. Common challenges in occupancy prediction include vanishing dynamic objects and blurred predictions, especially over long prediction horizons. In this work, we propose a double-prong neural network architecture to predict the spatiotemporal evolution of the occupancy state. One prong is dedicated to predicting how the moving ego vehicle will observe the static environment. The other prong predicts how the dynamic objects in the environment will move. Experiments on the real-world Waymo Open Dataset show that the fused output of the two prongs is able to retain dynamic objects and reduce blurriness in the predictions over longer prediction horizons compared to baseline models.
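The fusion of the two prongs' outputs can be illustrated with a simple per-cell noisy-OR over occupancy probabilities: a cell is predicted occupied if either the static or the dynamic prong predicts it. This is an assumed rule for illustration only, not the paper's fusion mechanism.

```python
import numpy as np

def fuse_prongs(static_occ, dynamic_occ):
    """Fuse static- and dynamic-prong occupancy grids (per-cell
    probabilities in [0, 1]) with a noisy-OR rule, treating the two
    prongs as independent sources of occupancy evidence.
    Illustrative only; the paper's fusion is not reproduced here.
    """
    return 1.0 - (1.0 - static_occ) * (1.0 - dynamic_occ)
```

Under this rule a cell flagged by either prong stays occupied in the fused grid, which is one way to keep a moving object from vanishing when only the dynamic prong tracks it.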